A Wizard of Earthsea (1971) Ursula K. Le Guin

How did it come to be that Harry Potter is the publishing sensation of the century, yet Ursula K. Le Guin's Earthsea is only a popular cult novel? Indeed, the comparisons and unintentional intertextuality with Harry Potter are entirely unavoidable when reading this book, and, in almost every respect, Ursula K. Le Guin's universe comes out the victor. In particular, the wizarding world that Le Guin portrays feels a lot more generous and humble than the class-ridden world of Hogwarts School of Witchcraft and Wizardry. Just to take one example from many, in Earthsea, magic turns out to be nurtured in a bottom-up manner within small village communities, in almost complete contrast to J. K. Rowling's concept of benevolent government departments and NGO-like institutions, which now seems far too New Labour for me. Indeed, imagine an entire world imbued with the kindly benevolence of Dumbledore, and you've got some of the moral palette of Earthsea. The gently moralising tone that runs through A Wizard of Earthsea may put some people off:
Vetch had been three years at the School and soon would be made Sorcerer; he thought no more of performing the lesser arts of magic than a bird thinks of flying. Yet a greater, unlearned skill he possessed, which was the art of kindness.

Still, these parables aimed directly at the reader are fairly rare, and, for me, remain on the right side of being mawkish or hectoring. I'm thus looking forward to reading the next two books in the series soon.
Blood Meridian (1985) Cormac McCarthy

Blood Meridian follows a band of American bounty hunters who are roaming the Mexican-American borderlands in the late 1840s. Far from being remotely swashbuckling, though, the group are collecting scalps for money and killing anyone who crosses their path. It is the most unsparing treatment of American genocide and moral depravity I have ever come across, an anti-Western that flouts every convention of the genre. Blood Meridian thus has a family resemblance to that other great anti-Western, Once Upon a Time in the West: after making a number of gun-toting films that venerate the American West (i.e. his Dollars Trilogy), Sergio Leone turned his cynical eye on the genre.

Yet my previous paragraph actually euphemises just how violent Blood Meridian is. Indeed, I would need to be a much better writer (indeed, perhaps McCarthy himself) to adequately outline the tone of this book. In a certain sense, it's less that you read this book in a conventional sense, but rather that you are forced to witness successive chapters of grotesque violence... all occurring for no obvious reason.

It is often said that books 'subvert' a genre and, indeed, I implied as much above. But the term subvert implies a kind of Puck-like mischievousness, or brings to mind court jesters licensed to poke fun at the courtiers. By contrast, however, Blood Meridian isn't funny in the slightest. There isn't animal cruelty per se, but rather wanton negligence of another kind entirely; in fact, recalling a particular passage involving an injured horse makes me feel physically ill.

McCarthy's prose is at once both baroque in its language and thrifty in its presentation. As Philip Connors wrote back in 2007, McCarthy has spent forty years writing as if he were trying to expand the Old Testament, and learning that McCarthy grew up around the Church therefore came as no real surprise. As an example of his textual frugality, I often looked for greater precision in the text, finding myself asking who a particular 'he' is, or to which side of a fight two men belonged. Yet we must always remember that there is no precision to be found in a gunfight, so this infidelity is turned into a virtue. It's not that these are fair fights anyway, or even 'murder': Blood Meridian is just slaughter; pure butchery. Murder is a gross understatement for what this book is, and at many points we are grateful that McCarthy spares us precision.

At others, however, we can be thankful for his exactitude. There is no ambiguity regarding the morality of the puppy-drowning Judge, for example: a Colonel Kurtz who has been given free license over the entire American south. There is, thank God, no danger of Hollywood mythologising him into a badass hero. Indeed, we must all be thankful that it is impossible to film this ultra-violent book... Indeed, the broader idea of 'adapting' anything to this world is beyond sick. An absolutely brutal read; I cannot recommend it highly enough.
Bodies of Light (2014) Sarah Moss

Bodies of Light is a 2014 book by Glasgow-born Sarah Moss on the stirrings of women's suffrage within an arty clique in nineteenth-century England. Set in the intellectually smoggy cities of Manchester and London, this poignant book follows the studiously intelligent Alethia 'Ally' Moberly, who is struggling to gain the acceptance of herself, her mother and the General Medical Council. You can read my full review from July.
House of Leaves (2000) Mark Z. Danielewski

House of Leaves is a remarkably difficult book to explain. Although the plot refers to a fictional documentary about a family whose house is somehow larger on the inside than the outside, this quotidian horror premise doesn't explain the complex meta-commentary that Danielewski adds on top. For instance, the book contains a large number of pseudo-academic footnotes (many of which contain footnotes themselves), with references to scholarly papers, books, films and other articles. Most of these references are obviously fictional, but it's the kind of book where the joke is that some of them are not. The format, structure and typography of the book are highly unconventional too, with extremely unusual page layouts and styles.

It's the sort of book and idea that should be a tired gimmick but somehow isn't. This is particularly so when you realise it seems specifically designed to create a fandom around it and to manufacture its own 'cult' status, something that should be extremely tedious. But not only does this not happen, House of Leaves seems to have survived through two exhausting decades of found footage: The Blair Witch Project and Paranormal Activity are, to an admittedly lesser degree, doing much of the same thing as House of Leaves. House of Leaves might have its origins in Nabokov's Pale Fire or even Derrida's Glas, but it seems to have more in common with the claustrophobic horror of Cube (1997). And like all of these works, House of Leaves has an extremely strange effect on the reader or viewer, something quite unlike reading a conventional book. It wasn't so much what I got out of the book itself, but how it added a glow to everything else I read, watched or saw at the time. An experience.
Milkman (2018) Anna Burns

This quietly dazzling novel from Irish author Anna Burns is full of intellectual whimsy and oddball incident. Incongruously set in 1970s Belfast during the Troubles, Milkman's 18-year-old narrator (known only as 'middle sister') is the kind of dreamer who walks down the street with a Victorian-era novel in her hand. It's usually an error for a book to specifically mention other books, if only because inviting comparisons to great novels is grossly ill-advised. But it is a credit to Burns' writing that the references here actually add to the text and don't feel like a kind of literary paint-by-numbers.

Our humble narrator has a boyfriend of sorts, but the figure who looms largest in her life is a creepy 'milkman': an older, married man who is deeply integrated in the paramilitary tribalism. And when gossip about the narrator and the milkman surfaces, the milkman begins to invade her life to a suffocating degree. Yet this milkman is not even a milkman at all. Indeed, it's precisely this kind of oblique irony that runs through this daring but darkly compelling book.
The First Fifteen Lives of Harry August (2014) Claire North

Harry August is born, lives a relatively unremarkable life and finally dies a relatively unremarkable death. Not worth writing a novel about, I suppose. But then Harry finds himself born again in the very same circumstances, and as he grows from infancy into childhood again, he starts to remember his previous lives. This loop naturally drives Harry insane at first, but after finding that suicide doesn't stop the quasi-reincarnation, he becomes somewhat acclimatised to his fate. He prospers much better at school the next time around and is ultimately able to make better decisions about his life, especially when he just happens to know how to stay out of trouble during the Second World War.

Yet what caught my attention in this 'soft' sci-fi book was not necessarily the book's core idea but rather the way its connotations were so intelligently thought through. Just like in a musical theme and variations, the success of any concept-driven book is far more a product of how the implications of the key idea are played out than how clever the central idea was to begin with. Otherwise, you just have another neat Borges short story: satisfying, to be sure, but in a narrower way. From her relatively simple premise, for example, North has divined that if there were a community of people who could remember their past lives, this would actually allow messages and knowledge to be passed backwards and forwards in time. Ah, of course! Indeed, this very mechanism drives the plot: news comes back from the future that the progress of history is being interfered with, and, because of this, the end of the world is slowly coming. Through the lives that follow, Harry sets out to find out who is passing on technology before its time, and to work out how to stop them.

With its gently-moralising romp through the salient historical touchpoints of the twentieth century, I sometimes got a whiff of Forrest Gump. But it must be stressed that this book is far less certain of its 'right-on' liberal credentials than Robert Zemeckis' badly-aged film. And whilst we're on the topic of other media, if you liked the underlying conceit behind Stuart Turton's The Seven Deaths of Evelyn Hardcastle yet didn't enjoy the 'variations' of that particular tale, then I'd definitely give The First Fifteen Lives a try. At the very least, 15 is bigger than 7.

More seriously, though, The First Fifteen Lives appears to reflect anxieties about technology, particularly around modern technological accelerationism. At no point does it seriously suggest that if we could somehow possess the technology from a decade in the future then our lives would be improved in any meaningful way. Indeed, precisely the opposite is invariably implied. To me, at least, homo sapiens often seems to be merely marking time until we can blow each other up, and destroying the climate whilst sleepwalking into some crisis that might precipitate a thermonuclear genocide sometimes seems to be built into our DNA. In an era of cli-fi fiction and our non-fiction newspaper headlines, to label North's insight as 'prescience' might perhaps be overstating it, but perhaps that is the point: this destructive and negative streak is universal to all periods of our violent, insecure species.
The Goldfinch (2013) Donna Tartt

After Breaking Bad, the second biggest runaway success of 2014 was probably Donna Tartt's doorstop of a novel, The Goldfinch. Yet upon its release and popular reception, it got a significant number of bad reviews in the literary press with, of course, an equal number of predictable think pieces claiming this was sour grapes on the part of the cognoscenti. Ah, to be in 2014 again, when our arguments were so much more trivial.

For the uninitiated, The Goldfinch is a sprawling bildungsroman that centres on Theo Decker, a 13-year-old whose world is turned upside down when a terrorist bomb goes off whilst he is visiting the Metropolitan Museum of Art, killing his mother among other bystanders. Perhaps more importantly, he makes off with a painting in order to fulfil a promise to a dying old man: Carel Fabritius' 1654 masterpiece The Goldfinch. For the next 14 years (and almost 800 pages), the painting becomes the only connection to his lost mother as he's flung, almost entirely rudderless, around the Western world, encountering an array of eccentric characters.

Whatever the critics claimed, Tartt's near-perfect evocation of scenes, from the everyday to the unimaginable, is difficult to summarise. I wouldn't label it 'cinematic' due to her evocation of the interiority of the characters. Take, for example: "Even the suggestion that my father had close friends conveyed a misunderstanding of his personality that I didn't know how to respond". It's precisely this kind of relatable inner subjectivity that cannot be easily conveyed by film, and it is likely one of the main reasons why the 2019 film adaptation was such a damp squib. Tartt's writing is definitely not 'impressionistic' either: there are many near-perfect evocations of scenes, even ones we hope we cannot recognise from real life. In particular, some of the drug-taking scenes feel so credibly authentic that I sometimes worried about the author herself.

Almost eight months on from first reading this novel, what I remember most was what a joy this was to read. I do worry that it won't stand up to a more critical re-reading (the character named Xandra even sounds like the pharmaceuticals she is taking), but I think I'll always treasure the first days I spent with this often-beautiful novel.
Beyond Black (2005) Hilary Mantel

Published about five years before the hyperfamous Wolf Hall (2009), Hilary Mantel's Beyond Black is a deeply disturbing book about spiritualism and the nature of Hell, somewhat incongruously set in modern-day England. Alison Harte is a middle-aged psychic medium who works in the various towns of the London orbital motorway. She is accompanied by her stuffy assistant, Colette, and her spirit guide, Morris, who is invisible to everyone but Alison. However, this is no gentle and musk-smelling world of the clairvoyant and mystic, for Alison is plagued by spirits from her past who infiltrate her physical world, becoming stronger and nastier every day. Alison's smiling and rotund persona thus conceals a truly desperate woman: she knows beyond doubt the terrors of the next life, yet must studiously conceal them from her credulous clients.

Beyond Black would be worth reading for its dark atmosphere alone, but it offers much more than a chilling and creepy tale. Indeed, it is extraordinarily observant as well as unsettlingly funny about a particular tranche of British middle-class life. Still, it is the book's unnerving nature that sticks in the mind, and reading it noticeably changed my mood for days afterwards, and not necessarily for the best.
The Wall (2019) John Lanchester

The Wall tells the story of a young man called Kavanagh, one of the thousands of Defenders standing guard around a solid fortress that envelops the British Isles. A national service of sorts, it is Kavanagh's job to stop the so-called Others getting in. Lanchester is frank about what his wall provides to those who stand guard: the Defenders of the Wall are conscripted for two years on the Wall, with no exceptions, giving everyone in society a life plan and a story. But whilst The Wall is ostensibly about a physical wall, it works even better as a story about the walls in our mind. In fact, the book blends together some of the most important issues of our time: climate change, increasing isolation, Brexit and other widening societal divisions. If you liked P. D. James' The Children of Men you'll undoubtedly recognise much of the same intellectual atmosphere, although the sterility of John Lanchester's dystopia is definitely figurative and textual rather than literal. Despite the final chapters perhaps not living up to the world-building of the opening, The Wall features a taut and engrossing narrative, and it undoubtedly warrants even the most cursory glance at its symbolism. I've yet to read something by Lanchester I haven't enjoyed (even his short essay on cheating in sports, for example) and will definitely be reading more from him in 2022.
The Only Story (2018) Julian Barnes

The Only Story is the story of Paul, a 19-year-old boy who falls in love with 42-year-old Susan, a married woman with two daughters who are about Paul's age. The book begins with how Paul meets Susan in happy (albeit complicated) circumstances, but as the story unfolds, the novel becomes significantly more tragic and moving. Whilst the story begins from the first-person perspective, midway through the book it shifts into the second person, and, later, into the third as well. Both of these narrative changes suggested to me an attempt on the part of Paul the narrator (if not Barnes himself) to distance himself emotionally from the events taking place. This effect is a lot more subtle than it sounds, however: far more prominent and devastating is the underlying and deeply moving story about how the relationship ends up. Throughout this touching book, Barnes uses his mastery of language and observation to avoid the saccharine and the maudlin, and ends up with a heart-wrenching and emotive narrative. Without a doubt, this is the saddest book I read this year.
This release expands a sprintf template to suppress an R warning about a sprintf call with an unused argument. And once again, this release gets a #ThankYouCRAN mark as it was processed in a fully automated and intervention-free manner in a matter of minutes.
As usual, the NEWS entry follows.
CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in ttdo version 0.0.8 (2021-07-17)
- Expand sprintf template to suppress R warning
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
when an Asker meets a Guesser, unpleasantness results. An Asker won't think it's rude to request two weeks in your spare room, but a Guess culture person will hear it as presumptuous and resent the agony involved in saying no. Your boss, asking for a project to be finished early, may be an overdemanding boor or just an Asker, who's assuming you might decline. If you're a Guesser, you'll hear it as an expectation.

Askers should also be aware that there might be Guessers in their team. It can help to define clear guidelines about making requests (when do I expect an answer, under which budget/contract/responsibility does the request fall, what other task can be put aside to handle the urgent task?). Last, but not least, Making work visible has a lot of other proposals on how to make unplanned work visible and then deal with it.
A TODO item for removing .doctrees from installed files was created via Arch's TODO list mechanism. These .doctree files are caches generated by the Sphinx documentation generator when developing documentation so that Sphinx does not have to reparse all input files across runs. They should not be packaged, especially as they lead to the package being unreproducible as their pickled format contains unreproducible data. Jelle van der Waa and Eli Schwartz submitted various upstream patches to fix projects that install these by default.
Dimitry Andric was able to determine why the reproducibility status of FreeBSD's base.txz depended on the number of CPU cores, attributing it to an optimisation made to the Clang C compiler [ ]. After further detailed discussion on the FreeBSD bug it was possible to get the binaries reproducible again [ ].
For the GNU Guix operating system, Vagrant Cascadian started a thread about collecting reproducibility metrics and Jan "janneke" Nieuwenhuizen posted that they had further reduced their bootstrap seed to 25%, which is intended to reduce the amount of code to be audited to avoid potential compiler backdoors.
In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update as well as made the following changes within the distribution itself:
- autogen (Date issue)
- carla (Timestamp in Windows Portable Executable executables)
- fonttosfnt/xorg-x11-fonts (Address space layout randomization issue)
- fossil (Date issue)
- gcc10 C++ (Link-time optimisation issue)
- grep (Profile-guided optimisation issue)
- kubernetes1.18 (Remove Go build identifier)
- libjcat (Remove certificate)
- lifelines (Date issue)
- miredo (Drop hostname)
- stressapptest (Override date, user & host)

Elsewhere, a number of issues were identified in the reproducible-check tool that reports on the reproducible status of installed packages on a running Debian system. They were subsequently all fixed by Chris Lamb [ ][ ][ ].
Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of hundreds of packages that use the CMake build system, which led to a number of tests and next steps. [ ]
Chris Lamb contributed to a conversation regarding the nondeterministic execution order of Debian maintainer scripts that results in the arbitrary allocation of UNIX group IDs, referencing the Tails operating system's approach to this [ ]. Vagrant Cascadian also added to a discussion regarding verification formats for reproducible builds.
47 reviews of Debian packages were added, 37 were updated and 69 were removed this month, adding to our knowledge about identified issues. Chris Lamb identified and classified a new uids_gids_in_tarballs_generated_by_cmake_kde_package_app_templates issue [ ] and updated the paths_vary_due_to_usrmerge issue as deterministic, and Vagrant Cascadian updated the cmake_rpath_contains_build_path and gcc_captures_build_path issues. [ ][ ][ ]
Lastly, Debian Developer Bill Allombert started a mailing list thread regarding setting the -fdebug-prefix-map command-line argument via an environment variable, and Holger Levsen also filed three bugs against the debrebuild Debian package rebuilder tool (#961861, #961862 & #961864).
A SOURCE_DATE_EPOCH git log example was moved to another section [ ]. Chris Lamb also limited the number of news posts to avoid showing items from (for example) 2017 [ ].
strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. This month, Mattia Rizzolo bumped the debhelper compatibility level to 13 [ ] and adjusted a related dependency to avoid a potential circular dependency [ ].
- autogen (race condition)
- cockpit (date)
- fossil (date)
- libnvidia-container (date)
- libv3270 (date)
- netcdf-fortran
- seqtools
- python-pauvre
- petitboot
- fonts-anonymous-pro
- python-pyqtgraph (forwarded upstream)
- libqmi
- tkabber-plugins
- python-stem
- golang-v2ray-core
- critcl
- gftl
- libmbim
- neovim-qt
- golang-github-viant-toolbox
- libxml2 (random data corruption)

In addition: frr (build fails on single-processor machines), ghc-yesod-static/git-annex (a filesystem ordering issue) and ooRexx (ASLR-related issue).
In diffoscope development, versions 147, 148 and 149 were uploaded to Debian this month, with the following changes:

- /Info stanza). (#150)
- jsondiff version 1.2.0. (#159)
- File.recognizes that checks candidates against file(1). [ ]
- subprocess.check_output by using a wrapper. (#151)
- AbstractMissingType type instead of remembering to check for both types of missing files. [ ]
- .changes, .dsc and .buildinfo comparators. [ ]
- f-strings to tidy up code [ ][ ] and remove explicit u"unicode" strings [ ].
- --new-file option when comparing directories by merging DirectoryContainer.compare and Container.compare. (#180)
- --diff-mask=REGEX. (!51)
- --html-dir presenter format. [ ]
- --html-dir format. [ ][ ]
- tlsh fuzzy-matching library during tests [ ] and tweaked the build system to remove an unwanted .build directory [ ].

For the GNU Guix distribution Vagrant Cascadian updated the version of diffoscope to version 147 [ ] and later 148 [ ].
The Reproducible Builds project operates a comprehensive testing framework that powers tests.reproducible-builds.org. Amongst many other tasks, this tracks the status of our reproducibility efforts across many distributions as well as identifies any regressions that have been introduced. This month, Holger Levsen made the following changes:

- rsync2buildinfos.debian.net every night. [ ]
- .buildinfo files to include a fix regarding comparing source vs. binary package versions. [ ]
- archlinux_html_pages, openwrt_rebuilder_today and openwrt_rebuilder_future to known broken jobs. [ ]
- <meta> header to refresh the page every 5 minutes. [ ]
- fixfilepath on bullseye, to get better data about the ftbfs_due_to_f-file-prefix-map categorised issue.
Lastly, the usual build node maintenance was performed by Holger Levsen [ ][ ], Mattia Rizzolo [ ] and Vagrant Cascadian [ ][ ][ ][ ][ ].
You can get in touch with the Reproducible Builds project via IRC (#reproducible-builds on irc.oftc.net) or on the rb-general@lists.reproducible-builds.org mailing list.
This month's report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.
streamFn :: Stream Int -> Stream Int
streamFn = streamFilter (<15)
         . streamFilter (>5)
         . streamMap (*2)
Our system is distributed: we take a stream-processing program and
partition it into sub-programs, which are distributed to and run on separate
nodes (perhaps cloud instances, or embedded devices like Raspberry Pis etc.).
In order to do that, we need to be able to manipulate the stream-processing
program as data. We've initially opted for a graph data-structure, with the
vertices in the graph defined as
data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [String]
    , intype     :: String
    , outtype    :: String
    } deriving (Eq,Show)
A stream-processing program encoded this way, equivalent to the first example, would be:
path [ StreamVertex 0 Map    ["(*2)"]  "Int" "Int"
     , StreamVertex 1 Filter ["(>5)"]  "Int" "Int"
     , StreamVertex 2 Filter ["(<15)"] "Int" "Int"
     ]
We can easily manipulate instances of such types, rewrite them, partition them
and generate code from them. Unfortunately, this is quite a departure from the
first simple code example from the perspective of a user writing their program.
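As a rough illustration of that claim (this is my own sketch, not StrIoT code, and a real partitioner would respect the graph structure and placement constraints rather than chunking blindly), splitting a linear list of vertices into sub-programs is ordinary list manipulation:

-- Hypothetical sketch: chop a linear program, represented as a list of
-- vertices, into sub-programs of at most n vertices, one chunk per node.
partitionPath :: Int -> [vertex] -> [[vertex]]
partitionPath _ [] = []
partitionPath n vs = let (chunk, rest) = splitAt n vs
                     in chunk : partitionPath n rest

Applied to the three-vertex path above, partitionPath 1 would yield one sub-program per operator, each ready to be deployed to a separate node.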
Template Haskell gives us the ability to manipulate code as a data structure,
and also to inspect names to gather information about them (their type, etc.).
I started looking at TH to see if we could build something where the
user-supplied program was as close to that first case as possible.
TH limitations
There are two reasons that we can't easily manipulate a stream-processing
definition written as in the first example. The following expressions are
equivalent, in some sense, but are not equal, and so yield completely
different expression trees when quasi-quoted:
[| streamFilter (<15) . streamFilter (>5) . streamMap (*2) |]
[| \s -> streamFilter (<15) (streamFilter (>5) (streamMap (*2) s)) |]
[| streamMap (*2) >>> streamFilter (>5) >>> streamFilter (<15) |]
[| \s -> s & streamMap (*2) & streamFilter (>5) & streamFilter (<15) |]
[| streamFn |] -- a named expression, defined outside the quasi-quotes
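To make that concrete, here is a minimal sketch (mine, not from StrIoT; it uses the plain list filter so that it stands alone) showing that two of these "equivalent" forms quote to different expression trees:

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

main :: IO ()
main = do
  e1 <- runQ [| (filter (< 15) . filter (> 5)) :: [Int] -> [Int] |]
  e2 <- runQ [| (\s -> filter (< 15) (filter (> 5) s)) :: [Int] -> [Int] |]
  putStrLn (pprint e1)  -- rendered as a composition with (.)
  putStrLn (pprint e2)  -- rendered as a lambda with nested applications
  print (e1 == e2)      -- False: different Exp trees, same behaviour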
In theory, reify
can give you the definition of a function from its
name, but in practice it doesn't, because this was never implemented.
So at the very least we would need to insist that a user included the
entirety of a stream-processing program within quasi-quotes, and not
split it up into separate bits, with some bits defined outside the
quotes and references within (as in the last case above). We would
probably have to insist on a consistent approach for composing operators
together, such as always using (.) and never >>>, &, etc., which is limiting.
Incremental approach
After a while ruminating on this, and before moving onto something else,
I thought I'd try approaching it from the other side. Could I introduce
some TH into the existing approach, and improve it? The first thing I've
tried is to change the parameters field to TH's ExpQ, meaning the
map instance example above would be

StreamVertex 0 Map [ [| (*2) |] ] "Int" "Int"
I worked this through. It's an incremental improvement in ease and clarity
for the user writing a stream-processing program. It catches a class of
programming bugs that would otherwise slip through: the expressions in
the brackets have to be syntactically valid (although they aren't type
checked). Some of the StrIoT internals are also much improved,
particularly the logical operator. Here's an excerpt from a rewrite
rule that involves composing code embedded in strings, dealing with all
the escaping rules and hoping we've accounted for all possible incoming
expression encodings:
let f' = "(let f = ("++f++"); p = ("++p++"); g = ("++g++") in\
\ \\ (a,b) v -> (f a v, if p v a then g b v else b))"
a' = "("++a++","++b++")"
q' = "(let p = ("++p++"); q = ("++q++") in \\v (y,z) -> p v y && q v z)"
And the same section after, manipulating ExpQ types:
let f' = [| \ (a,b) v -> ($(f) a v, if $(p) v a then $(g) b v else b) |]
    a' = [| ($(a), $(b)) |]
    q' = [| \v (y,z) -> $(p) v y && $(q) v z |]
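As a side note, once predicates are ExpQ values this kind of composition can be given a name and reused. A hedged sketch of my own (not part of StrIoT) in the same shape as q' above:

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Combine two predicate expressions into a single predicate expression.
combinePreds :: ExpQ -> ExpQ -> ExpQ
combinePreds p q = [| \v -> $(p) v && $(q) v |]

main :: IO ()
main = do
  e <- runQ (combinePreds [| (> (5 :: Int)) |] [| (< (15 :: Int)) |])
  putStrLn (pprint e)   -- prints something like: \v -> (> 5) v && (< 15) v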
I think the code-generation part of StrIoT could be radically refactored
to take advantage of this change but I have not made huge inroads into that.
Next steps
This is, probably, where I am going to stop. This work is very interesting
to me but not the main thrust of my research. But incrementally improving
the representation gave me some ideas of what I could try next:
- intype and outtype could be TH Types instead of Strings. This would catch some simple problems like typos, etc., but we could possibly go further, and
- parameters is a list, because the different stream operators have different arities. streamFilter has one parameter (the filter predicate), so the list should have one element in that case, but streamExpand has none, so it should be empty. We could collapse this to a single ExpQ, which encoded however many parameters are necessary, either in an internal list, or where the parameters expression was actually a call to the relevant operator with its parameters supplied:

data StreamVertex = StreamVertex
    { vertexId    :: Int
    , opAndParams :: ExpQ
    } deriving (Eq,Show)
Example instances might be
StreamVertex 0 [| streamMap (*2) |]
StreamVertex 1 [| streamExpand |]
StreamVertex 2 [| streamScan (\c _ -> c+1) 0 |]
The vertexId field is a bit of a wart, but we require that due to
the graph data structure that we are using. A change there could
eliminate it, too. By this point we are not that far away from where
we started, and certainly much closer to the "pure" function application
in the very first example.
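To illustrate that last point, here is another hypothetical sketch of my own (not StrIoT code): if each vertex carried its whole operator application as an ExpQ, recovering a single composed function from a linear path is just a fold over splices.

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Compose a non-empty path of operator expressions, first vertex applied
-- first, yielding one expression equivalent to the original pipeline.
composePath :: [ExpQ] -> ExpQ
composePath = foldr1 (\op rest -> [| $(rest) . $(op) |])

-- e.g. composePath [ [| streamMap (*2) |]
--                  , [| streamFilter (>5) |]
--                  , [| streamFilter (<15) |] ]
-- would splice to something equivalent to streamFn at the top of this post.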
sip:657837644.522827@192.168.169.170

Now if only jami would reduce its memory usage, I could even recommend this setup to others. :) As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36
You will still be offered an .exe file to download. Cancel the download and you should now see the infamous "join from your browser" link.
Upon closer inspection, it seems you can get to the web client by changing the meeting's URL. The Zoom meeting link you have probably looks like this:

https://zoom.us/j/123456789

Change the /j/ to /wc/join/ to reach the web client directly:

https://zoom.us/wc/join/123456789
2019/20 | 2018/19 | 2017/18 | 2016/17 | 2015/16 | 2014/15 | 2013/14 | 2012/13 | 2011/12 | 2010/11 | 2009/10 | 2008/09 | 2007/08 | 2006/07 | 2005/06 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
number of (partial) days | 21 | 33 | 29 | 30 | 17 | 24 | 30 | 23 | 25 | 30 | 30 | 37 | 29 | 17 | 25 |
Damüls | 0 | 2 | 8 | 4 | 4 | 9 | 29 | 4 | 10 | 23 | 16 | 10 | 5 | 10 | 10 |
Diedamskopf | 21 | 26 | 19 | 23 | 12 | 13 | 1 | 19 | 14 | 4 | 13 | 23 | 24 | 4 | 15 |
Warth/Schröcken | 0 | 5 | 2 | 3 | 1 | 2 | 0 | 0 | 1 | 3 | 1 | 4 | 0 | 3 | 0 |
total meters of altitude | 160797 | 296308 | 266158 | 269819 | 138037 | 224909 | 274706 | 203562 | 228588 | 203918 | 202089 | 226774 | 219936 | 74096 | 124634 |
highscore | 9375 | 10850 | 11116 | 12245 | 11015 | 13278 | 12848m | 13885m | 13076m | 10976m | 11888m | 11272m | 12108m | 8321m | 10247m |
# of runs | 374 | 701 | 616 | 634 | 354 | 530 | 597 | 468 | 516 | 449 | 462 | 551 | 503 | 189 | 309 |
The main key is a certification and signature key (marked SC), but there is also an encryption subkey (marked E), a separate signature key (S), and two authentication keys (marked A) which I use as RSA keys to log into servers using SSH, thanks to the Monkeysphere project.
pub rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
8DC901CE64146C048AD50FBB792152527B75921E
uid [ultimate] Antoine Beaupré <anarcat@anarc.at>
uid [ultimate] Antoine Beaupré <anarcat@koumbit.org>
uid [ultimate] Antoine Beaupré <anarcat@orangeseeds.org>
uid [ultimate] Antoine Beaupré <anarcat@debian.org>
sub rsa2048/B7F648FED2DF2587 2012-07-18 [A]
sub rsa2048/604E4B3EEE02855A 2012-07-20 [A]
sub rsa4096/A51D5B109C5A5581 2009-05-29 [E]
sub rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]
All the subkeys (sub
) and identities (uid
) are bound by the main
certification key using cryptographic self-signatures. So while an
attacker stealing a private subkey can spoof signatures in my name or
authenticate to other servers, that key can always be revoked by the
main certification key. But if the certification key gets stolen, all
bets are off: the attacker can create or revoke identities or subkeys as
they wish. In a catastrophic scenario, an attacker could even steal the
key and remove your copies, taking complete control of the key, without
any possibility of recovery. Incidentally, this is why it is so
important to generate a revocation certificate and store it offline.
So by moving the certification key offline, we reduce the attack surface
on the OpenPGP trust chain: day-to-day keys (e.g. email encryption or
signature) can stay online but if they get stolen, the certification key
can revoke those keys without having to revoke the main certification
key as well. Note that a stolen encryption key is a different problem:
even if we revoke the encryption subkey, this will only affect future
encrypted messages. Previous messages will be readable by the attacker
with the stolen subkey even if that subkey gets revoked, so the benefits
of revoking encryption certificates are more limited.
With an encrypted volume, you can use the --iter-time argument when creating
a LUKS partition to increase key-derivation delay, which makes
brute-forcing much harder. Indeed, GnuPG 2.x doesn't
have a run-time option to configure the
key-derivation algorithm, although a
patch was introduced recently to make the
delay configurable at compile time in gpg-agent
, which is now
responsible for all secret key operations.
The downside of external volumes is complexity: GnuPG makes it difficult
to extract secrets out of its keyring, which makes the first setup
tricky and error-prone. This is easier in the 2.x series thanks to the
new storage system and the associated keygrip
files, but it still
requires arcane knowledge of GPG internals. It is also inconvenient to
use secret keys stored outside your main keyring when you actually do
need to use them, as GPG doesn't know where to find those keys anymore.
Another option is to set up a separate air-gapped system to perform
certification operations. An example is the PGP clean
room project,
which is a live system based on Debian and designed by DD Daniel Pocock
to operate an OpenPGP and X.509 certificate authority using commodity
hardware. The basic principle is to store the secrets on a different
machine that is never connected to the network and, therefore, not
exposed to attacks, at least in theory. I have personally discarded that
approach because I feel air-gapped systems provide a false sense of
security: data eventually does need to come in and out of the system,
somehow, even if only to propagate signatures out of the system, which
exposes the system to attacks.
System updates are similarly problematic: to keep the system secure,
timely security updates need to be deployed to the air-gapped system. A
common use pattern is to share data through USB keys, which introduce a
vulnerability where attacks like
BadUSB can infect the air-gapped
system. From there, there is a multitude of exotic ways of exfiltrating
the data using
LEDs,
infrared
cameras,
or the good old
TEMPEST
attack. I therefore concluded the complexity tradeoffs of an air-gapped
system are not worth it. Furthermore, the workflow for air-gapped
systems is complex: even though PGP clean room went a long way, it's
still lacking even simple scripts that allow signing or transferring
keys, which is a problem shared by the external LUKS storage approach.
Keycards are relatively simple to set up: moving a subkey onto one is a
single operation (the keytocard command in the --edit-key interface),
whereas moving private key material to a
LUKS-encrypted device or air-gapped computer is more complex.
Keycards are also useful if you operate on multiple computers. A common
problem when using GnuPG on multiple machines is how to safely copy and
synchronize private key material among different devices, which
introduces new security problems. Indeed, a "good rule of thumb in a
forensics lab",
according
to Robert J. Hansen on the GnuPG mailing list, is to "store the minimum
personal data possible on your systems". Keycards provide the best of
both worlds here: you can use your private key on multiple computers
without actually storing it in multiple places. In fact, Mike Gerwitz
went as far as
saying:
For users that need their GPG key on multiple boxes, I consider a smartcard to be essential. Otherwise, the user is just furthering her risk of compromise.
Smartcards are useful. They ensure that the private half of your key is never on any hard disk or other general storage device, and therefore that it cannot possibly be stolen (because there's only one possible copy of it). Smartcards are a pain in the ass. They ensure that the private half of your key is never on any hard disk or other general storage device but instead sits in your wallet, so whenever you need to access it, you need to grab your wallet to be able to do so, which takes more effort than just firing up GnuPG. If your laptop doesn't have a builtin cardreader, you also need to fish the reader from your backpack or wherever, etc.

"Smartcards" here refer to older OpenPGP cards that relied on the IEC 7816 smartcard connectors and therefore needed a specially-built smartcard reader. Newer keycards simply use a standard USB connector. In any case, it's true that having an external device introduces new issues: attackers can steal your keycard, you can simply lose it, or wash it with your dirty laundry. A laptop or a computer can also be lost, of course, but it is much easier to lose a small USB keycard than a full laptop and I have yet to hear of someone shoving a full laptop into a washing machine. When you lose your keycard, unless a separate revocation certificate is available somewhere, you lose complete control of the key, which is catastrophic. But, even if you revoke the lost key, you need to create a new one, which involves rebuilding the web of trust for the key, a rather expensive operation as it usually requires meeting other OpenPGP users in person to exchange fingerprints. You should therefore think about how to back up the certification key, which is a problem that already exists for online keys; of course, everyone has a revocation certificate and backups of their OpenPGP keys... right? In the keycard scenario, backups may be multiple keycards distributed geographically.

Note that, contrary to an air-gapped system, a key generated on a keycard cannot be backed up, by design. For subkeys, this is not a problem as they do not need to be backed up (except encryption keys). But, for a certification key, this means users need to generate the key on the host and transfer it to the keycard, which means the host is expected to have enough entropy to generate cryptographic-strength random numbers, for example. Also consider the possibility of combining different approaches: you could, for example, use a keycard for day-to-day operation, but keep a backup of the certification key on a LUKS-encrypted offline volume.

Keycards introduce a new element into the trust chain: you need to trust the keycard manufacturer to not have any hostile code in the key's firmware or hardware. In addition, you need to trust that the implementation is correct. Keycards are harder to update: the firmware may be deliberately inaccessible to the host for security reasons or may require special software to manipulate. Keycards may be slower than the CPU in performing certain operations because they are small embedded microcontrollers with limited computing power. Finally, keycards may encourage users to trust multiple machines with their secrets, which works against the "minimum personal data" principle.

A completely different approach called the trusted physical console (TPC) does the opposite: instead of trying to get private key material onto all of those machines, just have them on a single machine that is used for everything.
Unlike a keycard, the TPC is an actual computer, say a laptop, which has the advantage of needing no special procedure to manage keys. The downside is, of course, that you actually need to carry that laptop everywhere you go, which may be problematic, especially in some corporate environments that restrict bringing your own devices.
To experiment with a keycard, you can start with a temporary keyring and a test key:

export GNUPGHOME=$(mktemp -d)
gpg --generate-key
gpg --edit-key UID

Then use the key command to select the first subkey, then copy it to the keycard (you can also use the addcardkey command to just generate a new subkey directly on the keycard):
gpg> key 1
gpg> keytocard
If you want to move the key to the keycard, use the save command, which will remove the local copy of the private key, so the keycard will be the only copy of the secret key. Otherwise use the quit command to save the key on the keycard, but keep the secret key in your normal keyring; answer "n" to "save changes?" and "y" to "quit without saving?". This way the keycard is a backup of your secret key, which stays in your normal keyring ($GNUPGHOME).

Once a key has been moved to a keycard, --list-secret-keys will show it as sec> (or ssb> for subkeys) instead of the usual sec
keyword. If
the key is completely missing (for example, if you moved it to a LUKS
container), the #
sign is used instead. If you need to use a key from
a keycard backup, you simply do gpg --card-edit
with the key plugged
in, then type the fetch
command at the prompt to fetch the public key
that corresponds to the private key on the keycard (which stays on the
keycard). This is the same procedure as the one to use the secret key
on another
computer.
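As an illustration (using the example key above; the exact output format varies between GnuPG versions, so treat this as a sketch rather than verbatim output), a keyring where the certification key has been moved off the machine and the signature subkey lives on a keycard might list as:

$ gpg --list-secret-keys
sec#  rsa4096/792152527B75921E 2009-05-29 [SC] [expires: 2018-04-19]
      8DC901CE64146C048AD50FBB792152527B75921E
uid           [ultimate] Antoine Beaupré <anarcat@debian.org>
ssb   rsa4096/A51D5B109C5A5581 2009-05-29 [E]
ssb>  rsa2048/3EA1DDDDB261D97B 2017-08-23 [S]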
This article first appeared in the Linux Weekly News.
2005/06 | 2006/07 | 2007/08 | 2008/09 | 2009/10 | 2010/11 | 2011/12 | 2012/13 | 2013/14 | 2014/15 | 2015/16 | 2016/17 | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
number of (partial) days | 25 | 17 | 29 | 37 | 30 | 30 | 25 | 23 | 30 | 24 | 17 | 30 |
Damüls | 10 | 10 | 5 | 10 | 16 | 23 | 10 | 4 | 29 | 9 | 4 | 4 |
Diedamskopf | 15 | 4 | 24 | 23 | 13 | 4 | 14 | 19 | 1 | 13 | 12 | 23 |
Warth/Schröcken | 0 | 3 | 0 | 4 | 1 | 3 | 1 | 0 | 0 | 2 | 1 | 3 |
total meters of altitude | 124634 | 74096 | 219936 | 226774 | 202089 | 203918 | 228588 | 203562 | 274706 | 224909 | 138037 | 269819 |
highscore | 10247m | 8321m | 12108m | 11272m | 11888m | 10976m | 13076m | 13885m | 12848m | 13278 | 11015 | 12245 |
# of runs | 309 | 189 | 503 | 551 | 462 | 449 | 516 | 468 | 597 | 530 | 354 | 634 |
It's as though the implicit assumptions are that everybody backs all of their stuff up to at least two different devices and backups in the cloud in at least two separate countries. Well, people don't always have perfect backups. In fact, they usually don't have any.

It goes on to argue that, when you lose your password: "You lose everything. You lose your own identity."

The stateless nature of password hashers also means you do not need to use cloud services to synchronize your passwords, as there is (generally, more on that later) no state to carry around. This means, for example, that the list of accounts that you have access to is only stored in your head, and not in some online database that could be hacked without your knowledge. The downside of this is, of course, that attackers do not actually need to have access to your password hasher to start cracking it: they can try to guess your master key without ever stealing anything from you other than a single token you used to log into some random web site.

Password hashers also necessarily generate unique passwords for every site you use them on. While you can also do this with password managers, it is not an enforced decision. With hashers, you get distinct and strong passwords for every site with no effort.
(Note that some implementations rely on Math.random() calls, which are not considered cryptographically secure.)
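To make the underlying idea concrete, here is a toy sketch (my own, using the cryptonite library; it is not how any of the tools discussed actually work, and a real hasher would use a slow key-derivation function such as PBKDF2, scrypt or Argon2 rather than a single SHA-256) of deriving a per-site password from a master secret and a site label, so that nothing needs to be stored:

{-# LANGUAGE OverloadedStrings #-}
import Crypto.Hash (Digest, SHA256, hashWith)
import qualified Data.ByteString.Char8 as B

-- Toy derivation: hash the master secret and site label together and keep
-- a prefix of the hex digest. Illustrative only; do not use as-is.
sitePassword :: B.ByteString -> B.ByteString -> String
sitePassword master site = take 20 (show digest)
  where digest = hashWith SHA256 (master <> ":" <> site) :: Digest SHA256

-- > sitePassword "correct horse battery staple" "example.com"
-- yields a deterministic 20-character token unique to that site.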
Basically, as stated by Julian Morrison in this discussion:
A password is now ciphertext, not a block of line noise. Every time you transmit it, you are giving away potential clues of use to an attacker. [...] You only have one password for all the sites, really, underneath, and it's your secret key. If it's broken, it's now a skeleton-key [...]

Newer implementations like LessPass and Master Password fix this by using reasonable key derivation algorithms (PBKDF2 and scrypt, respectively) that are more resistant to offline cracking attacks, but who knows how long those will hold? To give a concrete example, if you would like to use the new winner of the password hashing competition (Argon2) in your password manager, you can patch the program (or wait for an update) and re-encrypt your database. With a password hasher, it's not so easy: changing the algorithm means logging in to every site you visited and changing the password. As someone who used a password hasher for a few years, I can tell you this is really impractical: you quickly end up with hundreds of passwords. The LessPass developers tried to facilitate this, but they ended up mostly giving up.

Which brings us to the question of state. A lot of those tools claim to work "without a server" or as being "stateless" and while those claims are partly true, hashers are way more usable (and more secure, with profile secrets) when they do keep some sort of state. For example, Password Hasher Plus records, in your browser profile, which site you visited and which settings were used on each site, which makes it easier to comply with weird password policies. But then that state needs to be backed up and synchronized across multiple devices, which led LessPass to offer a service (which you can also self-host) to keep those settings online. At this point, a key benefit of the password hasher approach (not keeping state) just disappears and you might as well use a password manager.

Another issue with password hashers is choosing the right one from the start, because changing software generally means changing the algorithm, and therefore changing passwords everywhere. If there was a well-established program that was recognized as a solid cryptographic solution by the community, I would feel more confident. But what I have seen is that there are a lot of different implementations, each with its own warts and flaws; because changing is so painful, I can't actually use any of those alternatives.

All of the password hashers I have reviewed have severe security versus usability tradeoffs. For example, LessPass has what seems to be a sound cryptographic implementation, but using it requires you to click on the icon, fill in the fields, click generate, and then copy the password into the field, which means at least four or five actions per password. The venerable Password Hasher is much easier to use, but it makes you type the master password directly in the site's password form, so hostile sites can simply use JavaScript to sniff the master password while it is typed. While there are workarounds implemented in Password Hasher Plus (the profile-specific secret), both tools are more or less abandoned now. The Password Hasher homepage, linked from the extension page, is now a 404. Password Hasher Plus hasn't seen a release in over a year and there is no space for collaborating on the software: the homepage is simply the author's Google+ page with no information on the project. I couldn't actually find the source online and had to download the Chrome extension by hand to review the source code.
Software abandonment is a serious issue for every project out there, but I would argue that it is especially severe for password hashers. Furthermore, I have had difficulty using password hashers in unified login environments like Wikipedia's or StackExchange's single-sign-on systems. Because they allow you to log in with the same password on multiple sites, you need to choose (and remember) what label you used when signing in. Did I sign in on stackoverflow.com? Or was it stackexchange.com?

Also, as mentioned in the previous article about password managers, web-based password managers have serious security flaws. Since more than a few password hashers are implemented using bookmarklets, they bring all of those serious vulnerabilities with them, which can range from account name to master password disclosures.

Finally, some of the password hashers use dubious crypto primitives that were valid and interesting a decade ago, but are really showing their age now. Stanford's pwdhash uses MD5, which is considered "cryptographically broken and unsuitable for further use". We have seen partial key recovery attacks against MD5 already and while those do not allow an attacker to recover the full master password yet (especially not with HMAC-MD5), I would not recommend anyone use MD5 in anything at this point, especially if changing that algorithm later is hard. Some hashers (like Password Hasher and Password Plus) use a single round of SHA-1 to derive a token from a password; WPA2 (standardized in 2004) uses 4096 iterations of HMAC-SHA1. A recent US National Institute of Standards and Technology (NIST) report also recommends "at least 10,000 iterations of the hash function".
Note: this article first appeared in the Linux Weekly News. Also, details of my research into password hashers are available in the password hashers history article.